Backpropagation training in adaptive quantum networks

Authors

  • Christopher Altman
  • Roman R. Zapatrin
Abstract

We introduce a robust, error-tolerant adaptive training algorithm for generalized learning paradigms in high-dimensional superposed quantum networks, or adaptive quantum networks. The formalized procedure applies standard backpropagation training across a coherent ensemble of discrete topological configurations of individual neural networks, each of which is formally merged into an appropriate linear superposition within a predefined, decoherence-free subspace. Quantum parallelism facilitates simultaneous training and revision of the system within this coherent state space, resulting in accelerated convergence to a stable network attractor under subsequent iterations of the backpropagation algorithm. Parallel evolution of linearly superposed networks incorporating backpropagation training provides quantitative, numerical indications for optimizing both single-neuron activation functions and the reconfiguration of whole-network quantum structure.
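As a rough illustration of the training scheme the abstract describes, the sketch below simulates a "superposed" ensemble classically: several network topologies are trained in parallel by ordinary backpropagation while amplitude-style weights are concentrated on the better-performing configurations. Everything here (the three hidden-layer sizes, the softmax-style reweighting rule, the XOR task) is an illustrative assumption; the paper's actual construction lives in a decoherence-free quantum subspace, which a classical ensemble can only caricature.

```python
# Classical caricature of backpropagation over a "superposition" of
# topologies: train each candidate network, then concentrate unit-norm
# amplitude weights on the low-loss configurations.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Net:
    """A one-hidden-layer network; `hidden` fixes its topology."""
    def __init__(self, n_in, hidden, n_out):
        self.W1 = rng.normal(0, 1, (n_in, hidden))
        self.W2 = rng.normal(0, 1, (hidden, n_out))

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)
        return sigmoid(self.h @ self.W2)

    def backprop(self, X, t, lr=0.5):
        # Standard backpropagation for squared error.
        y = self.forward(X)
        dy = (y - t) * y * (1 - y)
        dh = (dy @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= lr * self.h.T @ dy
        self.W1 -= lr * X.T @ dh
        return float(np.mean((y - t) ** 2))

# XOR as a toy task; the "superposed" ensemble spans three topologies.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

ensemble = [Net(2, h, 1) for h in (2, 4, 8)]
amps = np.ones(len(ensemble)) / np.sqrt(len(ensemble))  # unit-norm amplitudes

for epoch in range(2000):
    losses = np.array([net.backprop(X, t) for net in ensemble])
    # Reweight amplitudes toward low-loss topologies (an assumed,
    # softmax-style stand-in for the paper's coherent dynamics).
    amps = amps * np.exp(-losses)
    amps /= np.linalg.norm(amps)

print("topology weights |a|^2:", np.round(amps ** 2, 3))
```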


Related Articles

Application of Quantum Annealing to Training of Deep Neural Networks

In Deep Learning, a well-known approach for training a Deep Neural Network starts by training a generative Deep Belief Network model, typically using Contrastive Divergence (CD), then fine-tuning the weights using backpropagation or other discriminative techniques. However, the generative training can be time-consuming due to the slow mixing of Gibbs sampling. We investigated an alternative app...
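The generative step this snippet refers to, Contrastive Divergence, is compact enough to sketch. Below is a minimal CD-1 update for a single RBM layer in its standard textbook form; the layer sizes, learning rate, and toy data are assumptions, not details from the paper.

```python
# Minimal Contrastive Divergence (CD-1) update for one RBM layer.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd1_step(W, b_vis, b_hid, v0, lr=0.1):
    """One CD-1 update from a batch of visible vectors v0 (in place)."""
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # One Gibbs step back to the visibles (the slow-mixing part).
    pv1 = sigmoid(h0 @ W.T + b_vis)
    ph1 = sigmoid(pv1 @ W + b_hid)
    # Gradient approximation: data statistics minus model statistics.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_vis += lr * (v0 - pv1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)

# Toy usage on random binary data (shapes are illustrative):
W = rng.normal(0, 0.01, (6, 4)); bv = np.zeros(6); bh = np.zeros(4)
v = (rng.random((32, 6)) < 0.5).astype(float)
for _ in range(100):
    cd1_step(W, bv, bh, v)
```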


Adaptive Training of Radial Basis Function Networks Based on Cooperative Evolution and Evolutionary Programming

Neuro-fuzzy systems based on Radial Basis Function Networks (RBFN) and other hybrid artificial intelligence techniques are currently under intensive investigation. This paper presents an RBFN training algorithm based on evolutionary programming and cooperative evolution. The algorithm alternately applies basis function adaptation and backpropagation training until a satisfactory error is achie...
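The alternation the snippet describes can be sketched as follows: an evolutionary perturbation adapts the basis functions, the output weights are then refit, and the better configuration is kept. The Gaussian mutation and the least-squares fit (which, for a single linear output layer, plays the role of the backpropagation step) are assumptions standing in for the paper's more elaborate operators.

```python
# Sketch of alternating basis-function evolution and output-weight fitting.
import numpy as np

rng = np.random.default_rng(2)

def rbf_design(X, centers, width):
    # Gaussian basis activations, one column per center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def train_rbfn(X, y, n_centers=6, width=0.5, generations=50):
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    best_err, best = np.inf, None
    for _ in range(generations):
        # Evolutionary step: mutate the centers (basis function adaptation).
        trial = centers + rng.normal(0, 0.1, centers.shape)
        # Training step: for a linear output layer, gradient descent
        # converges to the least-squares solution, so fit it directly.
        H = rbf_design(X, trial, width)
        w, *_ = np.linalg.lstsq(H, y, rcond=None)
        err = float(np.mean((H @ w - y) ** 2))
        if err < best_err:            # selection: keep improvements
            centers, best_err, best = trial, err, (trial, w)
    return best, best_err

# Toy usage: fit a 1-D sine curve.
X = rng.uniform(-1, 1, (40, 1)); y = np.sin(3 * X[:, 0])
(centers, w), err = train_rbfn(X, y)
print("final MSE:", err)
```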


30 Years of Adaptive Neural Networks: Perceptron, Madaline, and Backpropagation

Fundamental developments in feedforward artificial neural networks from the past thirty years are reviewed. The central theme of this paper is a description of the history, origination, operating characteristics, and basic theory of several supervised neural network training algorithms including the Perceptron rule, the LMS algorithm, three Madaline rules, and the backpropagation technique. The...
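Of the algorithms this survey covers, the LMS (Widrow-Hoff) rule is simple enough to state directly. The sketch below gives its textbook form on a toy regression; it is not code from the survey itself, and the learning rate and data are arbitrary.

```python
# Textbook LMS (Widrow-Hoff) rule: nudge the weights along the input,
# scaled by the current prediction error.
import numpy as np

def lms_step(w, x, d, lr=0.01):
    """One LMS update: w <- w + lr * (d - w.x) * x."""
    err = d - w @ x          # error of the linear unit on this sample
    return w + lr * err * x

# Toy usage: recover a 2-weight linear map from noisy samples.
rng = np.random.default_rng(3)
w_true, w = np.array([2.0, -1.0]), np.zeros(2)
for _ in range(500):
    x = rng.normal(size=2)
    w = lms_step(w, x, w_true @ x + 0.01 * rng.normal())
print(w)  # approaches w_true
```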


An Adaptive Penalty-Based Learning Extension for the Backpropagation Family

Over the years, many improvements and refinements to the backpropagation learning algorithm have been reported. In this paper, a new adaptive penalty-based learning extension for the backpropagation learning algorithm and its variants is proposed. The new method initially puts pressure on artificial neural networks in order to get all outputs for all training patterns into the correct half of t...
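The truncated abstract suggests a penalty that first pushes each output into the correct half of its range before accuracy is refined. The hinge-style term below is one plausible reading of that idea; the weighting `beta` and the exact functional form are assumptions, since the paper's own penalty term is not shown here.

```python
# One plausible "wrong-half" penalty added to the usual squared error,
# for sigmoid outputs in [0, 1] with binary targets.
import numpy as np

def penalized_loss(y, t, beta=1.0):
    """Squared error plus a wrong-half hinge penalty (targets in {0,1})."""
    mse = np.mean((y - t) ** 2)
    # Distance by which each output misses the correct half of [0, 1]:
    # outputs should be > 0.5 when t == 1 and < 0.5 when t == 0.
    margin = np.where(t == 1, 0.5 - y, y - 0.5)
    penalty = np.mean(np.maximum(margin, 0.0) ** 2)
    return mse + beta * penalty
```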


Improving the Convergence of the Backpropagation Algorithm Using Local Adaptive Techniques

Since the presentation of the backpropagation algorithm, a vast variety of improvements to the technique for training feedforward neural networks have been proposed. This article focuses on two classes of acceleration techniques; one is known as local adaptive techniques, which rely on weight-specific information only, such as the temporal behavior of the partial derivative of the current weight. The ...
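One well-known member of this class is a sign-based, per-weight step-size rule in the spirit of Rprop, sketched below in simplified form: each weight adapts its own step from the temporal behavior of its partial derivative, growing the step when successive gradients agree in sign and shrinking it on a sign flip. The growth and shrink factors are the commonly cited defaults, used here as assumptions rather than values from the article.

```python
# Simplified sign-based local adaptation (Rprop-like): each weight
# keeps its own step size and updates it from gradient sign agreement.
import numpy as np

def local_adaptive_step(w, grad, prev_grad, step,
                        up=1.2, down=0.5, smin=1e-6, smax=1.0):
    """Update weights w given current and previous gradients."""
    agree = grad * prev_grad              # >0: same sign, <0: flipped
    step = np.where(agree > 0, np.minimum(step * up, smax), step)
    step = np.where(agree < 0, np.maximum(step * down, smin), step)
    w = w - np.sign(grad) * step          # move by the step, not magnitude
    return w, step
```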




Publication date: 2009